13 research outputs found

    PETS2009 and Winter-PETS 2009 results: a combined evaluation

    Get PDF
    This paper presents the results of the crowd image analysis challenge of the Winter PETS 2009 workshop. The evaluation is carried out using a selection of the metrics developed in the Video Analysis and Content Extraction (VACE) program and the CLassification of Events, Activities, and Relationships (CLEAR) consortium [13]. The evaluation highlights the detection and tracking performance of the authors’ systems in areas such as precision, accuracy and robustness. The performance is also compared to the results submitted to PETS 2009.
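
    The CLEAR family of metrics referenced above combines misses, false positives and identity switches into a single tracking accuracy score. As a rough illustration of that idea (not the VACE/CLEAR evaluation tool used for PETS), the sketch below computes a MOTA-style score from hypothetical per-frame counts.

```python
# Minimal sketch of a CLEAR-style tracking score (MOTA), assuming per-frame
# counts of misses, false positives, identity switches and ground-truth objects.
# Illustrative only; this is not the VACE/CLEAR evaluation tool used in PETS.

def mota(frames):
    """frames: iterable of dicts with keys 'misses', 'false_positives',
    'id_switches' and 'gt' (number of ground-truth objects in the frame)."""
    errors = sum(f["misses"] + f["false_positives"] + f["id_switches"] for f in frames)
    gt_total = sum(f["gt"] for f in frames)
    return 1.0 - errors / gt_total if gt_total else 0.0

# Hypothetical per-frame counts for illustration.
example = [
    {"misses": 1, "false_positives": 0, "id_switches": 0, "gt": 5},
    {"misses": 0, "false_positives": 2, "id_switches": 1, "gt": 6},
]
print(f"MOTA = {mota(example):.3f}")  # 1 - 4/11 ~= 0.636
```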

    Effective evaluation of privacy protection techniques in visible and thermal imagery

    Get PDF
    Privacy protection may be defined as replacing the original content of an image region with new, less intrusive content in which the target’s appearance is modified so that it is less recognizable. The development of privacy protection techniques therefore needs to be complemented by an established objective evaluation method that facilitates their assessment and comparison. Existing evaluation methods generally rely on subjective judgements, or assume a specific target type in the image data and use target detection and recognition accuracies to assess privacy protection. This paper proposes a new annotation-free evaluation method that is neither subjective nor tied to a specific target type. It assesses two key aspects of privacy protection: protection and utility. Protection is quantified as an appearance similarity, and utility as a structural similarity, between the original and privacy-protected image regions. We performed extensive experiments using six challenging datasets (comprising 12 video sequences), including a new dataset (six sequences) that contains visible and thermal imagery. The new dataset is made available online for the community. We demonstrate the effectiveness of the proposed method by evaluating six image-based privacy protection techniques, and compare the proposed method with existing methods.
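
    As a rough illustration of the protection/utility split described above, the sketch below scores a region pair with two stand-in measures: a grey-level histogram correlation as the appearance similarity and SSIM (from scikit-image) as the structural similarity. The paper’s actual similarity functions are not specified here, so both choices, and the synthetic test region, are assumptions.

```python
# A minimal sketch of the protection/utility idea, under the assumption that
# "appearance similarity" can be approximated by a grey-level histogram
# correlation and "utility" by SSIM between original and protected regions.
# The paper's exact measures may differ; this is illustrative only.
import numpy as np
from skimage.metrics import structural_similarity

def appearance_similarity(orig, prot, bins=32):
    """Correlation between intensity histograms (1.0 = identical appearance)."""
    h1, _ = np.histogram(orig, bins=bins, range=(0, 255), density=True)
    h2, _ = np.histogram(prot, bins=bins, range=(0, 255), density=True)
    h1, h2 = h1 - h1.mean(), h2 - h2.mean()
    denom = np.linalg.norm(h1) * np.linalg.norm(h2)
    return float(h1 @ h2 / denom) if denom else 0.0

def evaluate_region(orig, prot):
    """Return (protection, utility): low appearance similarity means strong
    protection; high structural similarity means the region stays useful."""
    protection = 1.0 - appearance_similarity(orig, prot)
    utility = structural_similarity(orig, prot, data_range=255)
    return protection, utility

# Hypothetical 64x64 grey-level region and a noise-distorted ("protected") version.
rng = np.random.default_rng(0)
region = rng.integers(0, 256, (64, 64)).astype(np.uint8)
protected = np.clip(region.astype(float) + rng.normal(0, 40, region.shape), 0, 255).astype(np.uint8)
print(evaluate_region(region, protected))
```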

    Fusion of heterogenous sensor data in border surveillance

    Get PDF
    Wide-area surveillance has become critically important, particularly for border control between countries where vast forested land border areas must be monitored. In this paper we address the problem of automatically detecting activity in forbidden areas, namely forested land border areas. To avoid the false detections often triggered in dense vegetation by single sensors such as radar, we present a multi-sensor fusion and tracking system that uses passive infrared detectors in combination with automatic person detection from thermal and visual video camera images. The approach combines weighted maps with a rule engine that associates data from the multiple weighted maps. The proposed approach is tested on real data collected by the EU FOLDOUT project in a location representative of a range of forested EU borders. The results show that the proposed approach can eliminate single-sensor false detections and enhance accuracy by up to 50%.
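
    A minimal sketch of the weighted-map fusion idea is given below: each sensor writes its detections into a shared ground-plane grid with a per-sensor weight, and a simple rule only raises an alarm where at least two sensors agree. The grid size, weights, threshold and agreement rule are illustrative assumptions, not the FOLDOUT system’s actual configuration.

```python
# Sketch: per-sensor weighted confidence maps on a common ground-plane grid,
# fused with a simple agreement rule that suppresses single-sensor detections.
# Grid size, weights and thresholds are assumptions for illustration.
import numpy as np

GRID = (100, 100)            # hypothetical 100 x 100 ground-plane grid
WEIGHTS = {"pir": 0.5, "thermal": 1.0, "visual": 1.0}

def sensor_map(detections, weight, grid=GRID):
    """Accumulate point detections (row, col) into a weighted confidence map."""
    m = np.zeros(grid)
    for r, c in detections:
        m[r, c] += weight
    return m

def fuse_and_decide(per_sensor_detections, min_sensors=2, threshold=1.5):
    """Fuse per-sensor maps; flag cells supported by >= min_sensors sensors
    whose combined weighted confidence exceeds the threshold."""
    maps = {s: sensor_map(d, WEIGHTS[s]) for s, d in per_sensor_detections.items()}
    combined = sum(maps.values())
    support = sum((m > 0).astype(int) for m in maps.values())
    return (support >= min_sensors) & (combined >= threshold)

# Hypothetical detections: PIR and thermal agree at (10, 20); visual alone fires at (50, 50).
alarms = fuse_and_decide({
    "pir": [(10, 20)],
    "thermal": [(10, 20)],
    "visual": [(50, 50)],
})
print(np.argwhere(alarms))   # only the multi-sensor cell (10, 20) is flagged
```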

    Robust abandoned object detection integrating wide area visual surveillance and social context

    Get PDF
    This paper presents a video surveillance framework that robustly and efficiently detects abandoned objects in surveillance scenes. The framework is based on a novel threat assessment algorithm which combines the concept of ownership with automatic understanding of social relations in order to infer abandonment of objects. Implementation is achieved through the development of a logic-based inference engine based on Prolog. Threat detection performance is evaluated by testing against a range of datasets describing realistic situations, and demonstrates a reduction in the number of false alarms generated. The proposed system represents the approach employed in the EU SUBITO project (Surveillance of Unattended Baggage and the Identification and Tracking of the Owner).
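
    The paper encodes its ownership and social-context reasoning in a Prolog inference engine; the sketch below paraphrases that kind of rule in Python: an object is flagged as abandoned only if its owner has been away for long enough and nobody socially related to the owner remains nearby. The distance and time thresholds and the relation model are hypothetical placeholders, not the SUBITO rules.

```python
# A Python paraphrase of the kind of ownership rule the paper encodes in Prolog.
# Distances, timings and the social-relation test are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class Person:
    id: str
    position: tuple                              # (x, y) on the ground plane, metres
    group: set = field(default_factory=set)      # ids of socially related people

@dataclass
class Obj:
    id: str
    position: tuple
    owner_id: str
    seconds_since_owner_left: float

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def is_abandoned(obj, people, max_dist=3.0, max_time=30.0):
    """True if the owner is far away for too long and no related person is near."""
    owner = next(p for p in people if p.id == obj.owner_id)
    if distance(owner.position, obj.position) <= max_dist:
        return False                             # owner still attends the object
    if obj.seconds_since_owner_left < max_time:
        return False                             # owner only just stepped away
    related_nearby = any(
        p.id in owner.group and distance(p.position, obj.position) <= max_dist
        for p in people
    )
    return not related_nearby                    # abandoned only if nobody related is close

people = [Person("a", (20.0, 0.0), group={"b"}), Person("b", (1.0, 1.0), group={"a"})]
bag = Obj("bag1", (0.0, 0.0), owner_id="a", seconds_since_owner_left=60.0)
print(is_abandoned(bag, people))   # False: a socially related person is still nearby
```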

    PROTECT: pervasive and useR fOcused biomeTrics bordEr projeCT. A Case Study

    Get PDF
    PROTECT: Pervasive and useR fOcused biomeTrics bordEr projeCT is an EU project funded by the Horizon 2020 Research and Innovation Programme. The main aim of PROTECT was to build an advanced biometric-based person identification system that works robustly across a range of border crossing types and that has strong user-centric features. This work presents a case study of the multibiometric verification system developed within PROTECT. The system has been designed to be suitable for different borders such as air, sea, and land borders, and covers two use cases: the walk-through scenario, in which the traveller is on foot, and the drive-through scenario, in which the traveller is in a vehicle. Each deployment includes a different set of biometric traits, and this paper illustrates how to evaluate such a multibiometric system in accordance with international standards and, in particular, how to overcome practical problems that may be encountered in multibiometric evaluation, such as differing score distributions and missing scores.
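
    The practical problems mentioned above (differing score distributions and missing scores) are commonly handled by per-matcher score normalisation and by renormalising fusion weights over the traits that actually produced a score. The sketch below illustrates that textbook approach with min-max normalisation and a weighted sum; the matcher ranges, weights and scores are invented for illustration and are not PROTECT’s actual fusion scheme.

```python
# Sketch of score-level fusion across traits with differing score ranges and
# possible missing scores. The ranges, weights and scores are hypothetical.

def minmax_normalise(score, lo, hi):
    """Map a raw matcher score into [0, 1] using that matcher's observed range."""
    return (score - lo) / (hi - lo)

def fuse(scores, ranges, weights):
    """Weighted-sum fusion over the traits that produced a score; weights of
    missing traits (score is None) are dropped and the rest renormalised."""
    acc, wsum = 0.0, 0.0
    for trait, s in scores.items():
        if s is None:                 # e.g. a trait not captured in the drive-through lane
            continue
        lo, hi = ranges[trait]
        acc += weights[trait] * minmax_normalise(s, lo, hi)
        wsum += weights[trait]
    return acc / wsum if wsum else 0.0

# Hypothetical matcher ranges and weights for three traits.
ranges = {"face": (0.0, 100.0), "iris": (0.0, 1.0), "finger_vein": (-1.0, 1.0)}
weights = {"face": 0.4, "iris": 0.4, "finger_vein": 0.2}

walk_through = {"face": 82.0, "iris": 0.91, "finger_vein": 0.35}
drive_through = {"face": 78.0, "iris": None, "finger_vein": 0.10}  # iris missing

print(round(fuse(walk_through, ranges, weights), 3))
print(round(fuse(drive_through, ranges, weights), 3))
```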

    Visual surveillance using 3D deformable models

    No full text
    SIGLE record. Available from the British Library Document Supply Centre (BLDSC), DSC:DXN060730, United Kingdom.

    PETS2010 and PETS2009 evaluation of results using individual ground truthed single views

    No full text
    This paper presents the results of the crowd image analysis challenge of the PETS2010 workshop. The evaluation was carried out using a selection of the metrics developed in the Video Analysis and Content Extraction (VACE) program and the CLassification of Events, Activities, and Relationships (CLEAR) consortium. The PETS 2010 evaluation was performed using new ground truth created from each independent two-dimensional view. In addition, the performance of the submissions to PETS 2009 and Winter-PETS 2009 was evaluated and included in the results. The evaluation highlights the detection and tracking performance of the authors’ systems in areas such as precision, accuracy and robustness.

    Surveillance camera calibration from observations of a pedestrian

    No full text
    Calibrated cameras are an extremely useful resource in computer vision scenarios. Typically, cameras are calibrated using calibration targets or measurements of the observed scene, or are self-calibrated from features matched between cameras with overlapping fields of view. This paper considers an approach to camera calibration based on observations of a pedestrian and compares the resulting calibration to a commonly used approach that requires measurements of the scene.
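
    One standard ingredient of pedestrian-based calibration is estimating the vertical vanishing point from the head-to-foot lines observed as a person walks through the scene; the horizon line, focal length and camera height can then be recovered in later steps. The sketch below shows only that first step on synthetic head/foot detections, as an assumption-laden illustration rather than the paper’s method.

```python
# Sketch: vertical vanishing point as the least-squares intersection of the
# head-to-foot lines of a walking pedestrian. The observations are synthetic.
import numpy as np

def vertical_vanishing_point(head_pts, foot_pts):
    """Least-squares intersection of lines through matching head/foot image points.
    Points are (x, y) pixels; each line is the cross product of the two points
    in homogeneous coordinates."""
    lines = [np.cross([*h, 1.0], [*f, 1.0]) for h, f in zip(head_pts, foot_pts)]
    A = np.asarray(lines, dtype=float)
    # The vanishing point v satisfies A v = 0; take the smallest right singular vector.
    _, _, vt = np.linalg.svd(A)
    v = vt[-1]
    return v[:2] / v[2]

# Synthetic head/foot detections of a pedestrian at several positions
# (roughly vertical lines that converge far below the image).
head = [(100.0, 200.0), (200.0, 210.0), (300.0, 220.0)]
foot = [(102.0, 400.0), (201.0, 405.0), (299.0, 410.0)]
print(vertical_vanishing_point(head, foot))
```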

    A novel shape feature for fast region-based pedestrian recognition

    No full text
    A new class of shape features for region classification and high-level recognition is introduced. The novel Randomised Region Ray (RRR) features can be used to train binary decision trees for object category classification using an abstract representation of the scene. In particular, we address the problem of human detection using an over-segmented input image. We therefore do not rely on pixel values for training; instead, we design and train specialised classifiers on the sparse set of semantic regions that compose the image. Thanks to the abstract nature of the input, the trained classifier has the potential to be fast and applicable to extreme imaging conditions. We demonstrate and evaluate its performance in people detection using a pedestrian dataset.
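
    The exact definition of the RRR features is not reproduced here, but the sketch below illustrates the general idea of ray-based features computed on a segmentation rather than on pixels: rays cast from a region’s centroid at random angles record how far they travel before leaving the region, and the resulting distances form a feature vector for a decision tree. The centroid-based construction, ray count and toy label map are assumptions.

```python
# A hedged sketch of a ray-based region feature in the spirit of the RRR idea:
# random rays from a region's centroid, each measuring the distance travelled
# before leaving the region in a segmentation label map. Illustrative only.
import numpy as np

def ray_features(labels, region_id, n_rays=8, max_len=200, rng=None):
    rng = rng or np.random.default_rng(0)
    ys, xs = np.nonzero(labels == region_id)
    cy, cx = ys.mean(), xs.mean()                     # region centroid
    angles = rng.uniform(0.0, 2.0 * np.pi, n_rays)    # random ray directions
    feats = []
    for a in angles:
        dy, dx = np.sin(a), np.cos(a)
        dist = 0
        while dist < max_len:
            y, x = int(round(cy + dist * dy)), int(round(cx + dist * dx))
            if not (0 <= y < labels.shape[0] and 0 <= x < labels.shape[1]):
                break
            if labels[y, x] != region_id:             # ray left the region
                break
            dist += 1
        feats.append(dist)
    return np.array(feats)                            # input vector for a decision tree

# Toy label map: a rectangular "region 1" block inside a background of zeros.
labels = np.zeros((100, 100), dtype=int)
labels[40:60, 30:70] = 1
print(ray_features(labels, region_id=1))
```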